==============================================================================
Simba Databricks JDBC Driver Release Notes
==============================================================================

The release notes provide details of enhancements, features, known issues,
and workflow changes in Simba Databricks JDBC Driver 2.7.2, as well as the
version history.

2.7.2 ========================================================================

Released 2025-04-16

Enhancements & New Features

* [SPARKJ-807] Upgraded IP range support

  The OAuthEnabledIPAddressRanges setting now allows overriding the OAuth
  private link. For more information, see the Installation and Configuration
  Guide.

* [SPARKJ-942] Refresh token support

  Refresh token support is now available. This enables the driver to
  automatically refresh authentication tokens using the Auth_RefreshToken
  property. For more information, see the Installation and Configuration
  Guide.

* [SPARKJ-952] UseSystemTrustStore support

  The connector now supports the system's trust store through the new
  UseSystemTrustStore property. When enabled (UseSystemTrustStore=1), the
  driver verifies connections using certificates from the system's trust
  store. For a connection sketch that sets this property, see the example
  after this list. For more information, see the Installation and
  Configuration Guide.

* [SPARKJ-952] UseServerSSLConfigsForOAuthEndPoint support

  The connector now supports the UseServerSSLConfigsForOAuthEndPoint
  property. When enabled, clients reuse the driver's SSL configuration for
  the OAuth endpoint. For more information, see the Installation and
  Configuration Guide.

* [SPARKJ-687] OAuth 2.0 Azure Managed Identity authentication support

  The connector now supports Azure Managed Identity OAuth 2.0
  authentication. To use it, set the Auth_Flow property to 3. For more
  information, see the Installation and Configuration Guide.

* [SPARKJ-958] VOID data type support

  The connector now supports the VOID data type in getColumns() and
  getTypeInfo() API calls. For more details, see:
  https://docs.databricks.com/aws/en/sql/language-manual/data-types/null-type

* [SPARKJ-995] Variant data type support

  The connector now supports the Variant data type in getColumns() and
  getTypeInfo() API calls. For more details, see:
  https://docs.databricks.com/en/sql/language-manual/data-types/variant-type.html

* [SPARKJ-1002] Token cache support

  The OAuth browser flow (Auth_Flow=2) now offers token caching support on
  Linux and macOS operating systems.

* [SPARKJ-1014] Updated netty libraries

  The connector has been upgraded with the following netty libraries:
  - netty-buffer 4.1.119.Final (previously 4.1.110.Final)
  - netty-common 4.1.119.Final (previously 4.1.110.Final)

* [SPARKJ-1052] Unknown types handling support

  The connector now lists columns with unknown or unsupported types and maps
  them to SQL VARCHAR in the getColumns() metadata API call.

* [SPARKJ-971] Query ID support

  The connector logs now include the query ID.

* [SPARKJ-875] TIMESTAMP_NTZ data type support

  The connector now supports the TIMESTAMP_NTZ data type. For more details,
  see:
  https://docs.databricks.com/aws/en/sql/language-manual/data-types/timestamp-ntz-type
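The following minimal Java sketch illustrates how several of the properties
described above might be combined. It is a sketch only: the host, HTTP path,
warehouse ID, and the use of AuthMech=11 for OAuth are illustrative
assumptions, not values taken from these notes. Consult the Installation and
Configuration Guide for the authoritative property names and values.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.util.Properties;

    public class ConnectSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder host, port, and HTTP path; substitute your own
            // workspace values.
            String url = "jdbc:databricks://example.cloud.databricks.com:443/"
                    + "default;transportMode=http;ssl=1;"
                    + "httpPath=/sql/1.0/warehouses/abc123";

            Properties props = new Properties();
            // AuthMech=11 (OAuth 2.0) is an assumption for this sketch.
            props.setProperty("AuthMech", "11");
            // Browser-based OAuth; token caching is now available on Linux
            // and macOS (SPARKJ-1002).
            props.setProperty("Auth_Flow", "2");
            // Verify TLS connections against the system trust store
            // (SPARKJ-952).
            props.setProperty("UseSystemTrustStore", "1");

            try (Connection conn = DriverManager.getConnection(url, props)) {
                // getColumns() now reports VOID, Variant, and TIMESTAMP_NTZ
                // columns, and maps unknown types to VARCHAR (SPARKJ-958,
                // SPARKJ-995, SPARKJ-875, SPARKJ-1052).
                DatabaseMetaData md = conn.getMetaData();
                try (ResultSet rs =
                        md.getColumns(null, "default", "my_table", "%")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("COLUMN_NAME")
                                + " -> " + rs.getString("TYPE_NAME"));
                    }
                }
            }
        }
    }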
Resolved Issues

The following issues have been resolved in Simba Databricks JDBC Driver
2.7.2.

* [SPARKJ-642] When using IBM JRE and the Arrow result set serialization
  feature, the connector now handles Unicode characters correctly.

* [SPARKJ-643] Complete error messages and causes for error code 401 are now
  returned.

* [SPARKJ-713] Heartbeat threads no longer leak when connections are created
  using the DataSource class.

* [SPARKJ-749] A change to the cloud fetch request list has been applied to
  better manage memory usage.

* [SPARKJ-894] The translation issue with COALESCE in the GROUP BY clause
  has been resolved.

* [SPARKJ-940] A potential OAuth2Secret leak in the driver log has been
  resolved.

* [SPARKJ-949] The tag mismatch error in OAuth U2M authentication
  (Auth_Flow=2) has been fixed.

* [SPARKJ-926] An issue where the useCustomTimestampConverter connection
  property failed to be sent to the server as an SSP property has been
  resolved.

* [SPARKJ-1005] Corrected the statement about HTTP proxy support in the
  Installation and Configuration Guide.

Known Issues

The following are known issues that you may encounter due to limitations in
the data source, the driver, or an application.

* [SPARKJ-573] Issue when deserializing Apache Arrow data with Java JVM
  versions 11 or higher, due to compatibility issues.

  As a workaround, if you encounter the "Error occurred while deserializing
  arrow data: sun.misc.Unsafe or java.nio.DirectByteBuffer.(long, int) not
  available" error, add the following JVM option:

    --add-opens java.base/java.nio=ALL-UNNAMED

  For more information, see the Installation and Configuration Guide.

* [SPARKJ-330] Issue with dates and timestamps before the beginning of the
  Gregorian calendar when connecting to Spark 2.4.4 or later, or versions
  previous to 3.0, with Arrow result set serialization.

  When using Spark 2.4.4 or later, or versions previous to Spark 3.0, DATE
  and TIMESTAMP data before October 15, 1582 may be returned incorrectly if
  the server supports serializing query results using Apache Arrow. This
  issue should not impact most distributions of Apache Spark.

  To confirm whether your distribution of Spark 2.4.4 or later is impacted
  by this issue, you can execute the following query:

    SELECT DATE '1581-10-14'

  If the result returned by the connector is 1581-10-24, then you are
  impacted by the issue. In this case, if your data set contains date and/or
  timestamp data earlier than October 15, 1582, you can work around the
  issue by adding EnableArrow=0 to your connection URL to disable the Arrow
  result set serialization feature.

* [SPARKJ-267] The JDBC 4.0 version of the connector fails to connect to
  servers that require encryption using TLS 1.1 or later.

  When you attempt to connect to the server, the connection fails and the
  connector returns an SSL handshake exception. This issue occurs only when
  you run the connector using Java Runtime Environment (JRE) 6.0. As a
  workaround, run the connector using JRE 7.0 or 8.0.

* When retrieving data from a BINARY column, a ClassCastException error
  occurs.

  In Spark 1.6.3 or earlier, the server sometimes returns a
  ClassCastException error when attempting to retrieve data from a BINARY
  column. This issue is fixed as of Spark 2.0.0. For more information, see
  the JIRA issue posted by Apache named "When column type is binary, select
  occurs ClassCastException in Beeline" at
  https://issues.apache.org/jira/browse/SPARK-12143.

Workflow Changes =============================================================

The following changes may disrupt established workflows for the connector.

2.7.1 -----------------------------------------------------------------------

* [SPARKJ-885] Renamed username and password authentication

  Beginning with this release, AuthMech 3 uses PAT (Personal Access Token)
  authentication. Previously, it was known as username and password
  authentication. It only accepts 'token' as the UID value, as shown in the
  sketch below. For more information, see the Installation and Configuration
  Guide.
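As an illustration of the renamed mechanism, the following minimal Java
sketch connects with AuthMech=3, passing the literal user ID 'token' and a
personal access token as the password. The host, HTTP path, and token value
are placeholders, not values from these notes.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class PatConnectSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder host, port, and HTTP path; substitute your own
            // workspace values.
            String url = "jdbc:databricks://example.cloud.databricks.com:443/"
                    + "default;transportMode=http;ssl=1;"
                    + "httpPath=/sql/1.0/warehouses/abc123";

            Properties props = new Properties();
            // PAT authentication (formerly username and password).
            props.setProperty("AuthMech", "3");
            // AuthMech 3 only accepts 'token' as the UID value.
            props.setProperty("UID", "token");
            // Placeholder; supply a real personal access token.
            props.setProperty("PWD", "<personal-access-token>");

            try (Connection conn = DriverManager.getConnection(url, props)) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }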
Version History ==============================================================

2.7.0 -----------------------------------------------------------------------

Released 2024-11-21

* [SPARKJ-951] Updated third-party libraries

  The connector has been upgraded with the following third-party libraries:
  - Apache Arrow 17.0.0 (previously 14.0.2)
  - flatbuffers 24.3.25 (previously 23.5.26)
  - jackson-annotations 2.17.1 (previously 2.16.0)
  - jackson-core 2.17.1 (previously 2.16.0)
  - jackson-databind 2.17.1 (previously 2.16.0)
  - jackson-datatype-jsr310 2.17.1 (previously 2.16.0)
  - netty-buffer 4.1.110.Final (previously 4.1.94.Final)
  - netty-common 4.1.110.Final (previously 4.1.94.Final)
  - commons-codec 1.17.0 (previously 1.15.0)

  The Google Guava library has been removed from the dependencies.

Note: Version 2.7.0 is the initial release of Simba Databricks JDBC Driver.

==============================================================================